
Neural Information Processing Systems

The dataset records the following DFT-computed properties for each molecule:

- DFT ROT CONSTANT A: rotational constant A (1/cm)
- DFT ROT CONSTANT B: rotational constant B (1/cm)
- DFT ROT CONSTANT C: rotational constant C (1/cm)
- DFT HOMO: energy of the highest occupied molecular orbital (Hartree)
- DFT LUMO: energy of the lowest unoccupied molecular orbital (Hartree)
- DFT HOMO-LUMO GAP: energy gap between the highest occupied and lowest unoccupied molecular orbitals (Hartree)
- DFT ATOMIC ENERGY: total energy of the constituent single atoms (Hartree)
- DFT FORMATION ENERGY: total energy minus the atomic energy (Hartree)
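
As an illustration, a single record with these fields might be represented as follows; this is a minimal sketch with hypothetical field names derived from the list above, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class DFTRecord:
    """One molecule's DFT-computed properties (hypothetical schema)."""
    rot_constant_a: float  # rotational constant A (1/cm)
    rot_constant_b: float  # rotational constant B (1/cm)
    rot_constant_c: float  # rotational constant C (1/cm)
    homo: float            # highest occupied molecular orbital energy (Hartree)
    lumo: float            # lowest unoccupied molecular orbital energy (Hartree)
    atomic_energy: float   # total energy of the single atoms (Hartree)
    total_energy: float    # total DFT energy (Hartree)

    @property
    def homo_lumo_gap(self) -> float:
        # gap between highest occupied and lowest unoccupied orbitals (Hartree)
        return self.lumo - self.homo

    @property
    def formation_energy(self) -> float:
        # total energy minus atomic energy (Hartree)
        return self.total_energy - self.atomic_energy
```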


MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens

Neural Information Processing Systems

Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, open-source multimodal interleaved datasets. In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS.
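
To make the notion of an interleaved sequence concrete, the sketch below shows one plausible record layout in which text blocks and image references alternate in document order; the field names are illustrative assumptions, not MINT-1T's published schema.

```python
# A minimal sketch of a multimodal interleaved document: an ordered list of
# text and image entries. Field names are hypothetical, not MINT-1T's schema.
interleaved_doc = {
    "doc_id": "example-0001",
    "source": "pdf",  # e.g., HTML, PDF, or ArXiv
    "sequence": [
        {"type": "text", "content": "Figure 1 shows the reaction setup."},
        {"type": "image", "url": "https://example.com/figure1.png"},
        {"type": "text", "content": "The yield increased with temperature."},
    ],
}

# Training an LMM typically consumes the sequence in order, interleaving
# image embeddings with the surrounding text tokens.
for entry in interleaved_doc["sequence"]:
    if entry["type"] == "text":
        print("text :", entry["content"])
    else:
        print("image:", entry["url"])
```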


Convexity Certificates from Hessians (Supplementary Material)

Neural Information Processing Systems

Here, we (1) provide the grammar for the formal language of mathematical expressions to which our certification algorithm is applied, (2) give more algorithmic details about our implementation of the Hessian approach, (3) show that our implementation of the Hessian approach can also certify the remaining differentiable CVX atoms with vector input, which we could not discuss in the main paper because of space constraints, and (4) provide more examples of differentiable functions that can be certified by the Hessian approach but are missing from CVX's DCP implementation. The formal language of mathematical expressions to which our certification algorithm is applied is specified by the grammar depicted in Figure 1. The language is rich enough to cover all the examples in the main paper and in this supplement. Figure 1: EBNF grammar for mathematical expressions supported by our approach. In this grammar, number is a placeholder for an arbitrary floating-point number, variable is a placeholder for variable names starting with a Latin character, and function is a placeholder for the supported elementary differentiable functions such as exp, log, and sum.
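
Figure 1 itself is not reproduced here, but the idea behind the Hessian approach can be sketched with an off-the-shelf computer algebra system: compute the symbolic Hessian of a candidate expression and verify that its eigenvalues are nonnegative for all inputs, which certifies positive semidefiniteness and hence convexity. The sketch below uses SymPy and a hand-picked example function; it illustrates the underlying criterion, not the paper's certification algorithm.

```python
# Sketch: check convexity of f(x, y) = exp(x) + y**4 by computing its
# symbolic Hessian and verifying its eigenvalues are nonnegative for all
# real x, y. This illustrates the Hessian criterion itself, not the
# paper's actual certification algorithm.
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = sp.exp(x) + y**4

H = sp.hessian(f, (x, y))
print(H)  # Matrix([[exp(x), 0], [0, 12*y**2]])

# The Hessian is diagonal here, so its eigenvalues are the diagonal
# entries: exp(x) > 0 and 12*y**2 >= 0, hence H is positive
# semidefinite everywhere and f is convex.
for eigval in H.eigenvals():
    print(eigval, eigval.is_nonnegative)
```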


Rebecca Saul

Neural Information Processing Systems

Binary analysis is a core component of many critical security tasks, including reverse engineering, malware analysis, and vulnerability detection. Manual analysis is often time-consuming, but identifying commonly used or previously seen functions can reduce the time it takes to understand a new file. However, given the complexity of assembly and the NP-hard nature of determining function equivalence, this task is extremely difficult. Common approaches often use sophisticated disassembly and decompilation tools, graph analysis, and other expensive preprocessing steps to perform function similarity searches over some corpus. In this work, we identify a number of discrepancies between the current research environment and the underlying application need.
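
As a contrast to pipelines built on heavy disassembly and graph analysis, the sketch below shows the simplest possible form of corpus-wide function similarity search: embed each function's raw bytes as a byte-bigram frequency vector and rank matches by cosine similarity. This is a hypothetical baseline to make the task concrete, not the approach proposed in the paper.

```python
# Hypothetical baseline: function similarity search over raw bytes,
# with byte-bigram counts as features and cosine similarity for ranking.
# No disassembly or decompilation required.
import numpy as np

def embed(func_bytes: bytes) -> np.ndarray:
    """Map raw function bytes to a normalized byte-bigram frequency vector."""
    vec = np.zeros(256 * 256, dtype=np.float32)
    for a, b in zip(func_bytes, func_bytes[1:]):
        vec[a * 256 + b] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def most_similar(query: bytes, corpus: dict[str, bytes]) -> list[tuple[str, float]]:
    """Rank corpus functions by cosine similarity to the query function."""
    q = embed(query)
    scores = [(name, float(embed(fb) @ q)) for name, fb in corpus.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

corpus = {"memcpy_v1": bytes.fromhex("554889e5488b45f8"),
          "strlen_v1": bytes.fromhex("554889e5488b45e8c9c3")}
print(most_similar(bytes.fromhex("554889e5488b45f0"), corpus))
```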


GlotCC: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for Minority Languages François Yvon

Neural Information Processing Systems

The need for large text corpora has increased with the advent of pretrained language models and, in particular, the discovery of scaling laws for these models. Most available corpora have sufficient data only for languages with large dominant communities. However, no corpus is available that (i) covers a wide range of minority languages; (ii) is generated by an open-source reproducible pipeline; and (iii) is rigorously cleaned of noise, making it trustworthy to use. We present GlotCC, a clean, document-level, 2TB general-domain corpus derived from CommonCrawl, covering more than 1000 languages. We make GlotCC, together with the system used to generate it (including the pipeline, language identification model, and filters), available to the research community.
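
A central step in such a pipeline is language identification with a confidence filter. The sketch below shows that step using the fastText API; the model path, label format, and threshold are illustrative assumptions, as GlotCC's actual pipeline, model, and filters are specified in the released system.

```python
# Sketch of a language-identification filter, one stage of a
# CommonCrawl cleaning pipeline. The model path and confidence
# threshold are illustrative; GlotCC's released pipeline defines
# the actual components.
import fasttext

model = fasttext.load_model("glotlid-model.bin")  # hypothetical local path

def keep_document(text: str, target_lang: str, threshold: float = 0.7) -> bool:
    """Keep a document only if LID confidently assigns the target language."""
    labels, probs = model.predict(text.replace("\n", " "), k=1)
    lang = labels[0].removeprefix("__label__")
    return lang == target_lang and probs[0] >= threshold

docs = ["...", "..."]  # candidate documents from a CommonCrawl snapshot
cleaned = [d for d in docs if keep_document(d, target_lang="quz_Latn")]
```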


Learning Graph Structure With A Finite-State Automaton Layer

Neural Information Processing Systems

Graph-based neural network models are producing strong results in a number of domains, in part because graphs provide flexibility to encode domain knowledge in the form of relational structure (edges) between nodes in the graph. In practice, edges are used both to represent intrinsic structure (e.g., abstract syntax trees of programs) and more abstract relations that aid reasoning for a downstream task (e.g., results of relevant program analyses). In this work, we study the problem of learning to derive abstract relations from the intrinsic graph structure. Motivated by their power in program analyses, we consider relations defined by paths on the base graph accepted by a finite-state automaton. We show how to learn these relations end-to-end by relaxing the problem into learning finite-state automata policies on a graph-based POMDP and then training these policies using implicit differentiation. The result is a differentiable Graph Finite-State Automaton (GFSA) layer that adds a new edge type (expressed as a weighted adjacency matrix) to a base graph. We demonstrate that this layer can find shortcuts in grid-world graphs and reproduce simple static analyses on Python programs. Additionally, we combine the GFSA layer with a larger graph-based model trained end-to-end on the variable misuse program understanding task, and find that using the GFSA layer leads to better performance than using hand-engineered semantic edges or other baseline methods for adding learned edge types.
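
The core output of the layer is easy to state concretely: given per-edge-type adjacency matrices and an automaton, the layer produces one new weighted adjacency matrix. The sketch below implements a heavily simplified, non-learned version in NumPy, where a soft automaton walks the graph and the probability of halting at node j when started from node i becomes the new edge weight; it is meant only to convey the shape of the computation, not the paper's POMDP formulation or implicit differentiation.

```python
# Simplified, non-learned sketch of a finite-state-automaton edge layer:
# a soft automaton walks a multi-relational graph, and the probability of
# halting at node j when started at node i becomes a new weighted edge
# (i, j). The paper learns the automaton end-to-end; here it is fixed.
import numpy as np

N, T_MAX = 5, 10
# Two base edge types on a 5-node path graph: "next" and "prev".
nxt = np.zeros((N, N)); prv = np.zeros((N, N))
for i in range(N - 1):
    nxt[i, i + 1] = 1.0
    prv[i + 1, i] = 1.0
edge_types = np.stack([nxt, prv])          # shape (E, N, N)

# One-state automaton policy: at each step, follow "next" with prob 0.9,
# "prev" with prob 0.0, or halt (accept) with prob 0.1.
follow = np.array([0.9, 0.0]); p_halt = 0.1

new_adj = np.zeros((N, N))
occupancy = np.eye(N)                      # occupancy[i, j]: start i, now at j
for _ in range(T_MAX):
    new_adj += p_halt * occupancy          # halt now: add edge weight
    step = np.einsum("e,enm->nm", follow, edge_types)
    occupancy = occupancy @ step           # take one soft step
print(np.round(new_adj, 3))                # mostly forward "shortcut" edges
```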


Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures

Neural Information Processing Systems

Conditional gradient methods (CGM) are widely used in modern machine learning. CGM's overall running time usually consists of two parts: the number of iterations and the cost of each iteration. Most efforts focus on reducing the number of iterations as a means to reduce the overall running time. In this work, we focus instead on improving the per-iteration cost of CGM. The bottleneck step in most CGM is maximum inner product search (MaxIP), which requires a linear scan over the parameters. In practice, approximate MaxIP data-structures are found to be helpful heuristics; theoretically, however, nothing is known about combining approximate MaxIP data-structures with CGM. In this work, we close this gap by providing a formal framework that combines locality-sensitive-hashing-type approximate MaxIP data-structures with CGM algorithms. As a result, we obtain the first algorithms whose cost per iteration is sublinear in the number of parameters for many fundamental optimization algorithms, e.g., Frank-Wolfe, the Herding algorithm, and policy gradient.
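
To see where the MaxIP step enters, consider Frank-Wolfe over the probability simplex: the linear minimization oracle reduces to an argmax of inner products against the vertex set, which is exactly the query an approximate MaxIP data-structure accelerates. The sketch below uses an exact linear scan as a stand-in for that data-structure; it illustrates the iteration structure, not the paper's sublinear-time construction.

```python
# Frank-Wolfe on the probability simplex for f(w) = 0.5 * ||A w - b||^2.
# The LMO step argmin over vertices e_i of <grad, e_i> = argmax_i of
# <-grad, e_i> is a maximum inner product search (MaxIP); here it is an
# exact linear scan, which approximate MaxIP data-structures aim to beat.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
A, b = rng.normal(size=(d, n)), rng.normal(size=d)

w = np.full(n, 1.0 / n)                  # start at the simplex center
for t in range(1, 101):
    grad = A.T @ (A @ w - b)
    i = int(np.argmax(-grad))            # MaxIP step: linear scan over n
    gamma = 2.0 / (t + 2.0)              # standard FW step size
    vertex = np.zeros(n); vertex[i] = 1.0
    w = (1.0 - gamma) * w + gamma * vertex

print("objective:", 0.5 * np.sum((A @ w - b) ** 2))
```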



Tangent: Automatic differentiation using source-code transformation for dynamically typed array programming

Neural Information Processing Systems

The need to efficiently calculate first- and higher-order derivatives of increasingly complex models expressed in Python has stressed or exceeded the capabilities of available tools. In this work, we explore techniques from the field of automatic differentiation (AD) that can give researchers expressive power, performance, and strong usability. These include source-code transformation (SCT), flexible gradient surgery, efficient in-place array operations, and higher-order derivatives. We implement and demonstrate these ideas in the Tangent software library for Python, the first AD framework for a dynamic language that uses SCT.
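
As a minimal illustration of SCT-based AD, the sketch below differentiates a plain Python function with Tangent's grad; because Tangent generates new Python source rather than building a tape, the derivative code can be inspected directly. The snippet reflects Tangent's documented usage as best remembered, so treat the exact call signature as an assumption.

```python
# Minimal Tangent usage sketch (based on the library's documented API):
# tangent.grad performs source-code transformation and returns a new
# Python function computing df/dx.
import tangent

def f(x):
    return x * x + 3.0 * x

# verbose=1 prints the generated Python source of the gradient function.
df = tangent.grad(f, verbose=1)
print(df(2.0))  # f'(x) = 2x + 3, so 7.0
```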


Autoconj: Recognizing and Exploiting Conjugacy Without a Domain-Specific Language

Neural Information Processing Systems

Deriving conditional and marginal distributions using conjugacy relationships can be time-consuming and error-prone. In this paper, we propose a strategy for automating such derivations. Unlike previous systems, which focus on relationships between pairs of random variables, our system (which we call Autoconj) operates directly on Python functions that compute log-joint distribution functions.
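
To ground the idea, the sketch below writes a Beta-Bernoulli log-joint as an ordinary Python function of the kind Autoconj consumes, then derives the conjugate posterior by hand; a system like Autoconj automates exactly this kind of derivation from the function itself. The manual update shown is textbook conjugacy, and no Autoconj API calls are shown.

```python
# A Beta-Bernoulli model written as a plain Python log-joint function,
# the style of input Autoconj operates on, followed by the conjugate
# posterior update that such a system derives automatically.
import numpy as np
from scipy import special

a0, b0 = 2.0, 3.0  # Beta prior hyperparameters

def log_joint(theta, x):
    """log p(theta) + log p(x | theta) for a Beta prior, Bernoulli likelihood."""
    log_prior = ((a0 - 1.0) * np.log(theta) + (b0 - 1.0) * np.log1p(-theta)
                 - special.betaln(a0, b0))
    log_lik = np.sum(x * np.log(theta) + (1.0 - x) * np.log1p(-theta))
    return log_prior + log_lik

x = np.array([1.0, 0.0, 1.0, 1.0])
# Conjugacy: the posterior is Beta(a0 + sum(x), b0 + sum(1 - x)).
a_post, b_post = a0 + x.sum(), b0 + (1.0 - x).sum()
print("posterior:", a_post, b_post)  # Beta(5.0, 4.0)
```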